
My project: eatmemory - is your computer hungry?

2023-11-07 by: flushy@flushy.net
From: flushy@flushy.net
------------------------------------------------------

I've been playing around with this project that I originally wrote in 2003 to
help test different task schedulers in the CK-Linux kernel - a particularly
interesting kernel available for Gentoo at the time which focused on lowering
latency and improving user interaction.

I've had some needs lately: simulating loads in Kubernetes clusters. So I
updated the project.

https://github.com/gonoph/eatmemory

# What does it do?

Essentially, it eats memory.

That's it.

But, I've added some stuff to it.

* Consume free memory flag - that's the first thing I updated. Previously,
   you told it how much RAM you wanted it to eat, and it did it. Now, you can
   tell it a percentage of memory to eat, as well as tell it to only work with
   memory that is reported as "free". With this flag, it's a kinder, gentler
   eatmemory.

* Detect memory limits via cgroups v2 - this is a container thing. Containers
   can limit the amount of available memory, but that limit isn't immediately
   apparent unless you go check whether you're bound by a cgroup. So,
   eatmemory is now cgroup aware, which means it'll run a little nicer inside
   a container (see the sketch after this list).

* Memory randomizer flag - Darwin, aka macOS, compresses memory when it's
   under memory pressure - similar to swap. In fact, maybe it's compressed
   swap; I'm not sure. All I know is that when I tried to consume 8GB with
   eatmemory on Darwin, it was too damn smart and compressed most of it. So I
   added code to hash the memory to random bits in order to make it really
   hard for the compressor to do anything with it (there's a sketch of that
   idea further down).
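
To give a feel for the first two items, here's a rough sketch of how a
percentage-of-free target and cgroup v2 detection can fit together on Linux.
This is not the project's actual code - the memory.max path is the standard
cgroup v2 location, but the fallbacks and the hard-coded 25% default here are
my assumptions for illustration:

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/sysinfo.h>

/* Returns the cgroup v2 memory limit in bytes, or 0 if unlimited/absent. */
static unsigned long long cgroup_limit(void) {
    FILE *f = fopen("/sys/fs/cgroup/memory.max", "r");
    char buf[64];
    unsigned long long limit = 0;

    if (!f)
        return 0;                     /* not cgroup v2, or no access */
    if (fgets(buf, sizeof(buf), f) && strncmp(buf, "max", 3) != 0)
        limit = strtoull(buf, NULL, 10);
    fclose(f);
    return limit;                     /* "max" means no limit -> 0 */
}

int main(void) {
    struct sysinfo si;
    unsigned long long limit, freeb, target;
    int pct = 25;                     /* the default used in the example below */

    if (sysinfo(&si) != 0)
        return 1;
    freeb = (unsigned long long)si.freeram * si.mem_unit;

    limit = cgroup_limit();
    if (limit && limit < freeb)
        freeb = limit;                /* inside a container, the cgroup cap wins */

    target = freeb * pct / 100;
    printf("eating %llu of %llu free bytes (%d%%)\n", target, freeb, pct);
    return 0;
}
```

The interesting bit is the cap check: inside a container, the cgroup limit is
usually far lower than what sysinfo() reports for the host, so the cap has to
win or you'll get OOM-killed before you hit your target.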

# What's the purpose?

Well, it's to eat memory. Next question.

# But why?

Sometimes you need to simulate a workload, or you need to simulate it in a
controlled way - like a busy system or cluster of systems. That's what this is
for. You load up a few of these, feed them some tunable parameters, and then
measure the outcome of the whole system while it's under intense memory
pressure.

I also do some tricks to keep it out of swap. Or rather, I simulate something
doing work on that memory, so the system has to keep pulling it back out of
swap if it puts it there.

It looks and feels like a workload that's working on a bunch of memory.
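
For the curious, here's a minimal sketch of what such a touch loop can look
like. Overwriting the block with cheap pseudo-random bytes does double duty:
it gives a memory compressor nothing to work with, and it forces any
swapped-out pages back in. The xorshift generator is my stand-in, not
necessarily what eatmemory actually does:

```
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Cheap pseudo-random generator; the output is effectively incompressible. */
static uint64_t xorshift64(uint64_t *s) {
    *s ^= *s << 13;
    *s ^= *s >> 7;
    *s ^= *s << 17;
    return *s;
}

int main(void) {
    size_t bytes = 32ULL << 20;               /* 32 megabytes, as in the example run */
    uint64_t *block = malloc(bytes);
    uint64_t seed = 0x9e3779b97f4a7c15ULL;
    unsigned loop = 0;

    if (!block)
        return 1;

    for (;;) {                                /* the simulated "work" loop */
        for (size_t i = 0; i < bytes / sizeof(uint64_t); i++)
            block[i] = xorshift64(&seed);     /* touches every page, every pass */
        printf("++ Looped: %u\n", ++loop);
    }
}
```

Each pass prints a counter, which is where the "++ Looped: N" chatter in the
example output below comes from.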

I also added some swanky triggers to my container repository, so you can
immediately pull it and run it as a container:

# Example

What this is doing:

* runs a container limited to 128M of RAM
* uses the default eatmemory setting of 25%
* passes the --free (-f) flag to make it nice

```
$ podman run --memory 128M -it quay.io/gonoph/eatmemory:latest -f
Limiting to 25% of available free memory: free=128; total=128
using 25% of available memory 128 megabytes for a target allocation of 32 megabytes
Consuming 32 megabytes
+ detected running in a cgroup with memory limits.
+ Created 32 megabytes
+ Allocated 33554688 bytes
++ Looped: 1
++ Looped: 2
++ Looped: 3
++ Looped: 4
```

You can use docker, too, I guess.
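The same flags should carry over, assuming the image behaves the same way
under docker:

```
$ docker run --memory 128M -it quay.io/gonoph/eatmemory:latest -f
```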

# Anyways

I just figured I'd toss it out there and see if your computers are hungry.

--b